    Regularized Newton Methods for X-ray Phase Contrast and General Imaging Problems

    Like many other advanced imaging methods, x-ray phase contrast imaging and tomography require mathematical inversion of the observed data to obtain real-space information. While an accurate forward model describing the generally nonlinear image formation from a given object to the observations is often available, explicit inversion formulas are typically not known. Moreover, the measured data may be insufficient for stable image reconstruction, in which case they have to be complemented by suitable a priori information. In this work, regularized Newton methods are presented as a general framework for the solution of such ill-posed nonlinear imaging problems. As a proof of principle, the approach is applied to x-ray phase contrast imaging in the near-field propagation regime. Simultaneous recovery of the phase and amplitude from a single near-field diffraction pattern without homogeneity constraints is demonstrated for the first time. The presented methods further permit all-at-once phase contrast tomography, i.e. simultaneous phase retrieval and tomographic inversion. We demonstrate the potential of this approach by three-dimensional imaging of a colloidal crystal at 95 nm isotropic resolution. Comment: (C)2016 Optical Society of America. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modifications of the content of this paper are prohibited.
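    One standard regularized Newton scheme of the kind the abstract describes is the iteratively regularized Gauss-Newton method (IRGNM): linearize the forward model, solve a Tikhonov-regularized normal equation pulling toward the initial guess, and geometrically decrease the regularization. A minimal sketch on a toy nonlinear forward map (the quadratic model, iteration count, and decay factor below are illustrative choices, not the paper's actual phase-contrast operator):

```python
import numpy as np

def irgnm(F, jac, y, x0, n_iter=20, alpha0=1.0, q=0.7):
    """Iteratively regularized Gauss-Newton for ill-posed F(x) = y:
    each step solves (J^T J + a I) dx = J^T (y - F(x)) + a (x0 - x),
    with the regularization weight a decaying geometrically."""
    x = x0.copy()
    alpha = alpha0
    for _ in range(n_iter):
        J = jac(x)
        r = y - F(x)
        A = J.T @ J + alpha * np.eye(x.size)
        b = J.T @ r + alpha * (x0 - x)
        x = x + np.linalg.solve(A, b)
        alpha *= q  # shrink the Tikhonov term as the iterate improves
    return x

# Toy mildly nonlinear forward model standing in for the (much more
# complex) near-field propagation operator:
F = lambda x: x + 0.1 * x**2
jac = lambda x: np.diag(1.0 + 0.2 * x)

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
y = F(x_true) + 1e-4 * rng.normal(size=5)  # slightly noisy observations
x_rec = irgnm(F, jac, y, x0=np.zeros(5))
```

    On this well-conditioned toy problem the iterate recovers the true parameters up to the regularization bias and noise level.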

    Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

    Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.
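    The core idea of evaluating configurations on small subsets and extrapolating to the full dataset can be illustrated with a much simpler stand-in for Fabolas' Gaussian-process model: fit a power law to the learning curve of each configuration and compare the extrapolated errors. The sizes, exponents, and configurations below are synthetic and purely illustrative:

```python
import numpy as np

def extrapolate_error(sizes, errors, full_size):
    """Fit err(s) ~ a * s**(-b) by least squares in log-log space and
    extrapolate to the full dataset size (a crude proxy for a learned
    model of validation error vs. training set size)."""
    slope, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
    return np.exp(log_a) * full_size**slope

sizes = np.array([100, 200, 400, 800])
# Synthetic learning curves for two hypothetical configurations:
err_cfg1 = 2.0 * sizes**-0.3    # worse on tiny subsets, learns fast
err_cfg2 = 0.4 * sizes**-0.05   # better on tiny subsets, flattens out
e1 = extrapolate_error(sizes, err_cfg1, full_size=100_000)
e2 = extrapolate_error(sizes, err_cfg2, full_size=100_000)
```

    Here the configuration that looks worse on the smallest subset extrapolates to the better error on the full dataset, which is exactly the kind of conclusion that is cheap to reach on subsets and expensive to reach by full training runs.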

    Probabilistic Linear Solvers: A Unifying View

    Several recent works have developed a new, probabilistic interpretation of numerical algorithms for solving linear systems, in which the solution is inferred in a Bayesian framework, either directly or by inferring the unknown action of the matrix inverse. These approaches have typically focused on replicating the behavior of the conjugate gradient method as a prototypical iterative method. In this work, surprisingly general conditions for the equivalence of these disparate methods are presented. We also describe connections between probabilistic linear solvers and projection methods for linear systems, providing a probabilistic interpretation of a far more general class of iterative methods. In particular, this provides such an interpretation of the generalised minimum residual method. A probabilistic view of preconditioning is also introduced. These developments unify the literature on probabilistic linear solvers and provide foundational connections to the literature on iterative solvers for linear systems.
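    The solution-based variant of this framework can be sketched in a few lines: place a Gaussian prior on the solution of A x = b and condition on projected observations of the form S^T A x = S^T b along chosen search directions S. The prior and directions below are illustrative; specific choices of both are what recover specific projection methods such as conjugate gradients:

```python
import numpy as np

def bayes_linear_solve(A, b, S, x0=None, Sigma0=None):
    """Solution-based probabilistic linear solver: prior x ~ N(x0, Sigma0),
    conditioned on the linear observations S.T @ A @ x = S.T @ b.
    Returns the Gaussian posterior mean and covariance."""
    n = A.shape[0]
    x0 = np.zeros(n) if x0 is None else x0
    Sigma0 = np.eye(n) if Sigma0 is None else Sigma0
    V = A.T @ S                          # each column defines one observation
    G = V.T @ Sigma0 @ V                 # Gram matrix of the observations
    K = Sigma0 @ V @ np.linalg.inv(G)    # Gaussian-conditioning "gain"
    resid = S.T @ (b - A @ x0)
    mean = x0 + K @ resid
    cov = Sigma0 - K @ V.T @ Sigma0
    return mean, cov

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)              # symmetric positive definite system
b = rng.normal(size=4)
# Conditioning on a full set of directions recovers the exact solution,
# with the posterior uncertainty collapsing to zero:
mean, cov = bayes_linear_solve(A, b, S=np.eye(4))
```

    With fewer than n directions the posterior mean is the corresponding projection-method iterate and the posterior covariance quantifies the remaining uncertainty.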

    Probabilistic Linear Algebra

    Linear algebra operations are at the core of many computational tasks. For example, evaluating the density of a multivariate normal distribution requires the solution of a linear equation system and the determinant of a square matrix. Frequently, and in particular in machine learning, the size of the involved matrices is too large to compute exact solutions, necessitating approximation. Building upon recent work (Hennig and Kiefel 2012; Hennig 2015), this thesis considers numerical linear algebra from a probabilistic perspective. Part iii establishes connections between approximate linear solvers and Gaussian inference, with a focus on projection methods. One result is the observation that solution-based inference (Cockayne, Oates, Ipsen, and Girolami 2018) is subsumed in the matrix-based inference perspective (Hennig and Kiefel 2012). Part iv shows how the probabilistic viewpoint leads to a novel algorithm for kernel least-squares problems. A Gaussian model over kernel functions is proposed that uses matrix multiplications computed by conjugate gradients to obtain a low-rank approximation of the kernel. The derived algorithm, kernel machine conjugate gradients, provides empirically better approximations than conjugate gradients and, when used for Gaussian process regression, additionally provides estimates for posterior variance and log-marginal likelihood, without the need to rerun. Part v is concerned with the approximation of kernel matrix determinants. Assuming the inputs to the kernel are independent and identically distributed, a stopping condition for the Cholesky decomposition is presented that provides probably approximately correct (PAC) estimates of the log-determinant with little overhead.
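    The opening example, a multivariate normal density requiring both a linear solve and a determinant, is concrete enough to write out. A single Cholesky factorization supplies both quantities, which is the standard exact route that the thesis's approximations are designed to replace at scale:

```python
import numpy as np

def mvn_logpdf(x, mu, K):
    """Log-density of N(mu, K) via one Cholesky factorization:
    the triangular factor gives both the solve K^{-1}(x - mu)
    and log det K from its diagonal."""
    L = np.linalg.cholesky(K)
    z = np.linalg.solve(L, x - mu)           # forward solve L z = x - mu
    logdet = 2.0 * np.log(np.diag(L)).sum()  # log det K = 2 * sum log L_ii
    n = x.size
    return -0.5 * (z @ z + logdet + n * np.log(2 * np.pi))

# Check against the closed form for a diagonal covariance:
mu = np.zeros(2)
K = np.diag([1.0, 4.0])
lp = mvn_logpdf(np.array([1.0, 2.0]), mu, K)
```

    For large kernel matrices this O(n^3) factorization is exactly what becomes infeasible, motivating the probabilistic approximations of parts iv and v.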

    On the Groundstate of Yang-Mills Quantum Mechanics

    A systematic method to calculate the low energy spectrum of SU(2) Yang-Mills quantum mechanics with high precision is given and applied to obtain the energies of the groundstate and the first few excited states. Comment: 4 pages REVTEX twocolumn, no figures; important calculational mistake corrected which considerably changes the conclusions; references added.

    The Dipole Picture and Saturation in Soft Processes

    We attempt to describe soft hadron interactions in the framework of saturation models, one based upon the Balitsky-Kovchegov non-linear equation and another one due to Golec-Biernat and Wüsthoff. For pp, Kp, and πp scattering the relevant hadronic wave functions are formulated, and the total and elastic cross-sections and the forward elastic slope are calculated and compared to experimental data. The saturation mechanism leads to reasonable reproduction of the data for the quantities analyzed, except for the forward elastic slope, where the predicted increase with energy is too moderate. Comment: 12 pages, 4 figures. One figure and several explanations are added. The version is to appear in PL

    A Generational Model of Political Learning

    We propose a mathematical framework for modeling opinion change using large-scale longitudinal data sets. Our framework encompasses two varieties of Bayesian learning theory as well as Mannheim's theory of generational responses to political events. The basic assumptions underlying the model are (1) that historical periods are characterized by shocks to existing political opinions, and (2) that individuals of different ages may attach different weights to those political shocks. Political generations emerge endogenously from these basic assumptions: the political views of identifiable birth cohorts differ, and evolve distinctively through time, due to the interaction of age-specific weights with period-specific shocks. We employ this model to examine generational changes in party identification using survey data from the American National Election Studies.
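    The two assumptions can be exercised in a tiny simulation: opinions accumulate period-specific shocks scaled by an age-specific weight, and cohort differences emerge with no generation labels built in. The Gaussian weight peaking in young adulthood and the single-shock history below are illustrative choices, not the paper's estimated model:

```python
import numpy as np

def simulate_cohorts(shocks, birth_years, weight):
    """Accumulate age-weighted period shocks for each birth cohort:
    assumption (1) supplies the shocks, assumption (2) the weights;
    generational differences fall out of their interaction."""
    opinions = {}
    for by in birth_years:
        op = 0.0
        for t, shock in enumerate(shocks):
            age = t - by
            if age >= 0:             # cohort only updates once born
                op += weight(age) * shock
        opinions[by] = op
    return opinions

# Hypothetical weight: shocks matter most in the "impressionable
# years" around age 20 and fade for older individuals:
weight = lambda age: np.exp(-((age - 20) / 10.0) ** 2)
shocks = np.zeros(60)
shocks[30] = 1.0                     # one large political event in year 30
ops = simulate_cohorts(shocks, birth_years=[0, 10, 25], weight=weight)
```

    The cohort that is 20 at the time of the shock absorbs it fully, while cohorts that are older or younger at that moment are marked far less, producing a distinct "generation" from a single event.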

    Svitchboard 1: Small vocabulary tasks from switchboard 1

    We present a conversational telephone speech data set designed to support research on novel acoustic models. Small vocabulary tasks from 10 words up to 500 words are defined using subsets of the Switchboard-1 corpus; each task has a completely closed vocabulary (an OOV rate of 0%). We justify the need for these tasks, describe the algorithm for selecting them from a large corpus, give a statistical analysis of the data, and present baseline whole-word hidden Markov model recognition results. The goal of the paper is to define a common data set and to encourage other researchers to use it.
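    A closed-vocabulary subset with a 0% OOV rate can be constructed by construction rather than by chance: pick a word list, then keep only the utterances covered entirely by it. A simplified sketch of that idea (the paper's actual selection algorithm over Switchboard-1 optimizes coverage more carefully than this greedy frequency cut):

```python
from collections import Counter

def closed_vocab_subset(utterances, vocab_size):
    """Take the `vocab_size` most frequent words, then keep only the
    utterances whose words all fall inside that list, so the
    out-of-vocabulary rate of the resulting task is 0% by design."""
    counts = Counter(w for utt in utterances for w in utt.split())
    vocab = {w for w, _ in counts.most_common(vocab_size)}
    kept = [u for u in utterances if set(u.split()) <= vocab]
    return vocab, kept

# Toy corpus of five "utterances":
utts = ["yes yes", "yes no", "no way", "maybe later", "yes"]
vocab, kept = closed_vocab_subset(utts, vocab_size=2)
```

    The trade-off the paper's algorithm has to manage is visible even here: a smaller vocabulary guarantees closure but discards more of the corpus.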

    Lattice Supersymmetry and Topological Field Theory

    It is known that certain theories with extended supersymmetry can be discretized in such a way as to preserve an exact fermionic symmetry. In the simplest model of this kind, we show that this residual supersymmetric invariance is actually a BRST symmetry associated with gauge fixing an underlying local shift symmetry. Furthermore, the starting lattice action is then seen to be entirely a gauge fixing term. The corresponding continuum theory is known to be a topological field theory. We look, in detail, at one example - supersymmetric quantum mechanics - which possesses two such BRST symmetries. In this case, we show that the lattice theory can be obtained by blocking out of the continuum in a carefully chosen background metric. Such a procedure will not change the Ward identities corresponding to the BRST symmetries since they correspond to topological observables. Thus, at the quantum level, the continuum BRST symmetry is preserved in the lattice theory. Similar conclusions are reached for the two-dimensional complex Wess-Zumino model and imply that all the supersymmetric Ward identities are satisfied {\it exactly} on the lattice. Numerical results supporting these conclusions are presented. Comment: 18 pages, 2 figures

    Manipulation Planning and Control for Shelf Replenishment

    Manipulation planning and control are relevant building blocks of a robotic system, and their tight integration is a key factor in improving robot autonomy, allowing robots to perform manipulation tasks of increasing complexity, such as those needed in the in-store logistics domain. Supermarkets contain a large variety of objects that must be placed on shelf layers under specific constraints; doing this with a robot is challenging and requires high dexterity. However, integrating reactive grasping control with motion planning can allow robots to perform such tasks even with grippers of limited dexterity. The main contribution of the paper is a novel method for planning manipulation tasks to be executed using a reactive control layer that provides multiple control modalities, i.e., slipping avoidance and controlled sliding. Experiments with a new force/tactile sensor equipping the gripper of a mobile manipulator show that the approach allows the robot to successfully perform manipulation tasks infeasible with a standard fixed grasp. Comment: 8 pages, 12 figures, accepted at RA